129 research outputs found

    On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms

    Get PDF
    We give a lower bound on the iteration complexity of a natural class of Lagrangean-relaxation algorithms for approximately solving packing/covering linear programs. We show that, given an input with $m$ random 0/1-constraints on $n$ variables, with high probability, any such algorithm requires $\Omega(\rho \log(m)/\epsilon^2)$ iterations to compute a $(1+\epsilon)$-approximate solution, where $\rho$ is the width of the input. The bound is tight for a range of the parameters $(m, n, \rho, \epsilon)$. The algorithms in the class include Dantzig-Wolfe decomposition, Benders' decomposition, Lagrangean relaxation as developed by Held and Karp [1971] for lower-bounding TSP, and many others (e.g. by Plotkin, Shmoys, and Tardos [1988] and Grigoriadis and Khachiyan [1996]). To prove the bound, we use a discrepancy argument to show an analogous lower bound on the support size of $(1+\epsilon)$-approximate mixed strategies for random two-player zero-sum 0/1-matrix games.
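    To make the connection between iterations and support size concrete, here is a minimal sketch (assuming numpy; the function name and the constant 4 are illustrative) of multiplicative weights applied to a 0/1 matrix game — one representative of the Dantzig-Wolfe/Lagrangean class the paper analyzes. Each round adds at most one pure best response to the column player's empirical mix, so the iteration count, which scales as $\log(m)/\epsilon^2$, is also an upper bound on the support size the paper's discrepancy argument lower-bounds.

```python
import numpy as np

def approx_game_value(A, eps):
    """Approximate max_x min_y x^T A y for a 0/1 payoff matrix A via
    multiplicative weights. After T = O(log(m)/eps^2) rounds, the averaged
    play is an eps-approximate pair of mixed strategies, and the column
    strategy's support has size at most T."""
    m, n = A.shape
    T = max(1, int(np.ceil(4.0 * np.log(m) / eps**2)))
    w = np.ones(m)                      # row player's weights over pure strategies
    col_counts = np.zeros(n)            # column player's best responses so far
    for _ in range(T):
        x = w / w.sum()
        j = int(np.argmin(x @ A))       # adversary's best response to current mix
        col_counts[j] += 1
        w *= np.exp(eps * A[:, j])      # reward rows that score against column j
        w /= w.sum()                    # renormalize to avoid overflow
    return w / w.sum(), col_counts / T  # row mix, column mix (support <= T)
```

    For instance, on the 2x2 identity matrix game, `approx_game_value(np.eye(2), 0.1)` returns strategies whose value is within 0.1 of the true value 1/2.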

    Optimal Reachability in Divergent Weighted Timed Games

    Full text link
    Weighted timed games are played by two players on a timed automaton equipped with weights: one player wants to minimise the accumulated weight while reaching a target, while the other has the opposite objective. Used from a reactive-synthesis perspective, this quantitative extension of timed games allows one to measure the quality of controllers. Weighted timed games are notoriously difficult and quickly become undecidable, even when restricted to non-negative weights. Decidability results exist for subclasses of one-clock games, and for a subclass with non-negative weights defined by a semantic restriction on the weights of cycles. In this work, we introduce the class of divergent weighted timed games as a generalisation of this semantic restriction to arbitrary weights. We show how to compute their optimal value, yielding the first decidable class of weighted timed games with negative weights and an arbitrary number of clocks. In addition, we prove that divergence can be decided in polynomial space. Finally, we prove that for untimed games, this restriction yields a class of games whose value can be computed in polynomial time.
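    For the untimed case mentioned in the last sentence, the value is the fixed point of a classical min/max backward induction. Below is a minimal sketch (names, the edge representation, and the polynomial iteration cap are illustrative assumptions, not the paper's construction); under a divergence-style assumption that no cycle has weight zero, the iteration stabilizes after polynomially many rounds.

```python
import math

def untimed_game_value(states, edges, owner, targets, max_rounds=None):
    """Value iteration for an untimed weighted reachability game.
    Min picks successors at its states and wants to reach a target with
    minimal accumulated weight; Max opposes (value +inf if Max can avoid
    the target forever).  edges: dict state -> list of (weight, successor);
    owner: dict state -> 'min' or 'max'."""
    V = {s: (0.0 if s in targets else math.inf) for s in states}
    rounds = max_rounds if max_rounds is not None else len(states) ** 2
    for _ in range(rounds):
        newV = dict(V)
        for s in states:
            if s in targets or not edges.get(s):
                continue
            vals = [w + V[t] for (w, t) in edges[s]]
            newV[s] = min(vals) if owner[s] == 'min' else max(vals)
        if newV == V:    # fixed point reached
            break
        V = newV
    return V
```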

    The Complexity of Computing a Nash Equilibrium

    Full text link

    Polynomial-Time Amoeba Neighborhood Membership and Faster Localized Solving

    Full text link
    We derive efficient algorithms for coarse approximation of algebraic hypersurfaces, useful for estimating the distance between an input polynomial zero set and a given query point. Our methods work best on sparse polynomials of high degree (in any number of variables) but are nevertheless completely general. The underlying ideas, which we take the time to describe in an elementary way, come from tropical geometry. We thus reduce a hard algebraic problem to high-precision linear optimization, proving new upper and lower complexity estimates along the way.
    Comment: 15 pages, 9 figures. Submitted to conference proceedings.
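    To illustrate the tropical idea in its simplest form (a sketch of my own, not the paper's algorithm, which reduces the problem to high-precision linear optimization): for a sparse polynomial $f(x) = \sum_a c_a x^a$, evaluate the tropical terms $\log|c_a| + \langle a, \log|x|\rangle$ at the query point; if one term dominates the rest, $f$ cannot vanish nearby, so the gap between the two largest terms serves as a coarse distance proxy in log coordinates.

```python
import math

def tropical_terms(coeffs, monomials, query):
    """Values log|c_a| + <a, log|x|>, one per monomial of f(x) = sum c_a x^a.
    Assumes positive query coordinates and at least two monomials."""
    logx = [math.log(q) for q in query]
    return [math.log(abs(c)) + sum(ai * li for ai, li in zip(a, logx))
            for c, a in zip(coeffs, monomials)]

def tropical_distance_proxy(coeffs, monomials, query):
    """Gap between the two largest tropical terms at the query point: a
    large gap certifies that one monomial dominates, so the point is far
    (in log coordinates) from the zero set; a small gap flags proximity
    to the tropical variety."""
    t = sorted(tropical_terms(coeffs, monomials, query), reverse=True)
    return t[0] - t[1]

# e.g. f(x, y) = 1 + x^3*y - 2*x*y^2, queried at (2.0, 3.0):
# tropical_distance_proxy([1, 1, -2], [(0, 0), (3, 1), (1, 2)], (2.0, 3.0))
```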

    A deep cut ellipsoid algorithm for convex programming

    Get PDF
    This paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step, the algorithm requires no more computational effort to construct these deep cuts than its corresponding central cut version. Rules that prevent some of the numerical instabilities and theoretical drawbacks usually associated with the algorithm are also provided. Moreover, for a large class of convex programs a simple proof of its rate of convergence is given, and the relation with previously known results is discussed. Finally, some computational results of the deep and central cut versions of the algorithm applied to a min-max stochastic queue location problem are reported.
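    For reference, here is one deep-cut update step using the standard Bland-Goldfarb-Todd formulas (assuming numpy; a minimal sketch, not the paper's full algorithm, which adds safeguards against numerical instability). Setting the cut depth $\alpha = 0$ recovers the central-cut update, and a deeper cut shrinks the ellipsoid faster at no extra per-iteration cost — the point the paper exploits.

```python
import numpy as np

def deep_cut_update(c, P, a, b):
    """One deep-cut step for the ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1}
    and a violated constraint a^T x <= b (so a^T c > b).  Requires n >= 2
    and cut depth alpha in (-1/n, 1); alpha >= 1 would mean the constraint
    excludes the whole ellipsoid."""
    n = len(c)
    Pa = P @ a
    gamma = float(np.sqrt(a @ Pa))
    alpha = float(a @ c - b) / gamma              # cut depth
    assert -1.0 / n < alpha < 1.0, "cut too shallow/deep for a valid update"
    tau = (1 + n * alpha) / (n + 1)
    sigma = 2 * (1 + n * alpha) / ((n + 1) * (1 + alpha))
    delta = n**2 * (1 - alpha**2) / (n**2 - 1)
    c_new = c - tau * Pa / gamma                   # shift centre along the cut
    P_new = delta * (P - sigma * np.outer(Pa, Pa) / gamma**2)
    return c_new, P_new
```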

    Advances in low-memory subgradient optimization

    Get PDF
    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems by decomposition, duality, or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows bundle-type algorithms to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n^2, where n is the number of variables of the non-smooth problem. However, with the rapid development of ever more sophisticated models in industry, economics, finance, and elsewhere, such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results needed to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly reviewed to introduce the relevant fundamentals of non-smooth optimization. Special attention in this section is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds on the rate of convergence of subgradient optimization compared to the smooth case, but there are modern techniques that allow non-smooth convex optimization problems to be solved faster than these lower complexity bounds would otherwise dictate. Particular attention is given to the Nesterov smoothing technique, the Nesterov universal approach, and the Legendre (saddle-point) representation approach. The new results on universal Mirror Prox algorithms represent the original parts of the survey. To demonstrate the application of non-smooth convex optimization algorithms to the solution of huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of primal-dual variety and is proved to be optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints. The advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to the solution of convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains important references characterizing recent developments in non-smooth convex optimization.
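    A minimal sketch of the productive/non-productive step pattern behind such adaptive Mirror Descent methods for functional constraints, specialized to the Euclidean prox so that memory stays O(n) (assuming numpy; the function names, the stopping rule, and the exact averaging are illustrative assumptions, not the survey's precise statement):

```python
import numpy as np

def adaptive_md_constrained(grad_f, g, grad_g, x0, eps, max_iter=10000):
    """Subgradient scheme for min f(x) s.t. g(x) <= 0 with non-smooth
    Lipschitz f and g.  Productive steps (constraint satisfied to within
    eps) move along a subgradient of f; non-productive steps move along a
    subgradient of g to restore feasibility.  The adaptive step size
    eps/||d||^2 is the standard choice for this scheme; only the current
    point and a running sum are stored, i.e. O(n) memory."""
    x = np.asarray(x0, dtype=float)
    productive_sum, productive_count = np.zeros_like(x), 0
    for _ in range(max_iter):
        if g(x) <= eps:                         # productive: work on the objective
            d = np.asarray(grad_f(x), dtype=float)
            productive_sum += x
            productive_count += 1
        else:                                   # non-productive: reduce violation
            d = np.asarray(grad_g(x), dtype=float)
        nrm2 = float(d @ d)
        if nrm2 == 0.0:                         # stationary point of the active part
            break
        x = x - (eps / nrm2) * d                # adaptive step size eps/||d||^2
    return productive_sum / productive_count if productive_count else x
```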

    On the complexity of approximating the maximal inscribed ellipsoid for a polytope

    Full text link
    This paper was published in "Communications in Statistics-Simulation and Computation" 19(3) (1990), 971-100

    Leximin Allocations in the Real World

    No full text